✨ [open-api-framework#188] add csv option to data dump script #685
base: master
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master     #685      +/-   ##
==========================================
+ Coverage   82.33%   83.16%   +0.82%
==========================================
  Files         130      128       -2
  Lines        2463     2429      -34
  Branches      200      193       -7
==========================================
- Hits         2028     2020       -8
+ Misses        392      369      -23
+ Partials       43       40       -3
```
Force-pushed from cf9c1dc to 144633c.
Currently it checks if the … In this project the core app does not have many tables, but for openzaak, for example, it should probably be a zip/tar?
@Floris272 how much work would it be to export it to a .zip? I think that's probably ideal, otherwise let's make sure the CSV output dir always exists
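If the zip route turns out to be too much work, the directory check is a one-liner. A minimal sketch, assuming the `CSV_OUTPUT_DIR` variable name from the `bin/dump_data.sh` under review (the default path below is illustrative only):

```shell
# Sketch: guarantee the CSV output directory exists before the \copy loop.
# The default value of CSV_OUTPUT_DIR here is made up for illustration.
set -euo pipefail

CSV_OUTPUT_DIR="${CSV_OUTPUT_DIR:-/tmp/data_dump_csv}"

# mkdir -p is idempotent: it creates missing parents and is a no-op if the
# directory already exists, so the script is safe on a fresh container.
mkdir -p "$CSV_OUTPUT_DIR"

test -d "$CSV_OUTPUT_DIR" && echo "output dir ready: $CSV_OUTPUT_DIR"
```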
| Branch | feature/oaf-188-add-csv-option-to-data-dump-script | 
| Testbed | ubuntu-latest | 
Benchmark results:
| Benchmark | Latency (ms) | Baseline (ms) | Δ% | Upper Boundary (ms) | Limit used |
|---|---|---|---|---|---|
| performance_test/tests/test_objects_list.py::test_objects_api_list_filter_one_result | 19.44 | 19.26 | +0.89% | 20.23 | 96.08% |
| performance_test/tests/test_objects_list.py::test_objects_api_list_large_page_size_page_1 | 236.98 | 231.19 | +2.50% | 242.75 | 97.62% |
| performance_test/tests/test_objects_list.py::test_objects_api_list_large_page_size_page_5 | 238.10 | 233.83 | +1.82% | 245.52 | 96.98% |
| performance_test/tests/test_objects_list.py::test_objects_api_list_small_page_size_page_20 | 94.45 | 93.75 | +0.74% | 98.44 | 95.94% |
Didn't take much work: I made the export directory static and added it to the project; after the zip has been created, all .csv files are deleted in …
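The create-then-clean-up step could look like the following. This is a sketch, not the PR's actual code: it uses tar (which the PR later switched to) and illustrative paths and file names.

```shell
# Sketch: bundle the per-table CSVs into one archive, then remove the loose
# files so only the archive remains. All paths are illustrative.
set -euo pipefail

CSV_OUTPUT_DIR="/tmp/demo_csv"
ARCHIVE="/tmp/dump.tar"

mkdir -p "$CSV_OUTPUT_DIR"
echo "id,uuid" > "$CSV_OUTPUT_DIR/core_object.csv"   # stand-in for \copy output

# -C changes into the directory first, so the archive holds bare .csv names
tar -cf "$ARCHIVE" -C "$CSV_OUTPUT_DIR" .

# clean up the intermediate CSVs once the archive exists
rm -f "$CSV_OUTPUT_DIR"/*.csv

test -f "$ARCHIVE" && echo "archive created"
```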
Some issues related to running the command in Docker; I forgot to test this previously.
bin/dump_data.sh (outdated)
```shell
psql -c "\copy $table TO '$CSV_OUTPUT_DIR/$table.csv' WITH CSV HEADER"
done

zip -j "$ZIP_FILE" "$CSV_OUTPUT_DIR"/*.csv
```
Forgot to test this, but inside the container this currently won't run because the `zip` command is not found:

```
/dump_data.sh: line 106: zip: command not found
```
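One way to make the script robust either way is to probe for an archiver at runtime instead of assuming one is installed. A sketch, not part of the PR:

```shell
# Sketch: pick whichever archiver is available. command -v exits non-zero
# when the name is not found, so it works as a portable existence check.
if command -v zip >/dev/null 2>&1; then
    ARCHIVER="zip"
elif command -v tar >/dev/null 2>&1; then
    ARCHIVER="tar"
else
    echo "no archiver available" >&2
    exit 1
fi
echo "using $ARCHIVER"
```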
changed to tar
```shell
tar -xf dump.tar
test -f core_object.csv || exit 1
! test -f auth_group.csv || exit 1
grep "id,uuid,object_type_id,created_on,modified_on" core_object.csv
```
@Floris272 could you check if we can upgrade the pg_dump version inside the image to 17? I am running into this issue when trying to dump data with the docker compose setup:

```
pg_dump: error: aborting because of server version mismatch
pg_dump: detail: server version: 17.5 (Debian 17.5-1.pgdg110+1); pg_dump version: 15.14 (Debian 15.14-0+deb12u1)
```
slim-bookworm does not ship PostgreSQL 17 client packages; it could be done by adding the PostgreSQL apt repo or by updating to slim-trixie (where libpcre3 seems to be deprecated).
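For reference, adding the PGDG apt repo to a Debian bookworm image typically looks like the following. This is a sketch of the standard PGDG install steps, not tested against this particular image; verify the key URL and suite name before relying on it.

```shell
# Sketch: install the PostgreSQL 17 client from the PGDG apt repo on
# Debian bookworm (e.g. in a Dockerfile RUN step). Standard PGDG
# instructions, adapted; confirm details for your base image.
apt-get update && apt-get install -y --no-install-recommends \
    curl ca-certificates gnupg

# Import the PGDG signing key into a dedicated keyring
curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
    | gpg --dearmor -o /usr/share/keyrings/pgdg.gpg

# Register the bookworm-pgdg suite, pinned to that keyring
echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt bookworm-pgdg main" \
    > /etc/apt/sources.list.d/pgdg.list

apt-get update && apt-get install -y --no-install-recommends postgresql-client-17
```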
Hmm, that's annoying, but I'd rather not have our docs state that we support up to PostgreSQL 17 and then get complaints when people try this command with PostgreSQL 17.
Can you check whether we actually need libpcre3 and, if so, whether there is a replacement for it (apparently it's needed for uWSGI logging)? Since trixie is the current stable release of Debian, I think we could upgrade to that anyway.
One comment remaining about the pg client version.
Partially closes maykinmedia/open-api-framework#188
Changes